05. Getting to Know Your Simulator
Before we get started learning the various aspects of a PID controller, let's take some time to walk through some things you should know: the code simulator you are going to use and how it works!
Classes!
Before we get started building this controller, it is recommended that you have a brief overview of classes and how they work in Python. We are offering two options for those who may not have experience or would like a refresher on classes and object-oriented thinking!
First Option: A fast YouTube video crash course!
Second Option: Udacity's own hands-on course, to really nail the concept into your head!
You will learn more by taking the Udacity course, but we know that some of you are on a time crunch, so we also included the YouTube crash course. If you have any other questions or want to learn more, reach out to your fellow classmates on Slack and ask them about things like Encapsulation, Inheritance, Polymorphism, and Abstraction!
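If you would like a quick taste before diving in, here is a minimal, purely illustrative Python class. The Altimeter name and its attributes are made up for this sketch and are not part of the simulator; it simply shows the pattern you will see throughout this lesson: attributes set up in __init__, and methods that read or update them.
class Altimeter:
    # Store an initial reading when the object is created
    def __init__(self, altitude=0.0):
        self.altitude_ = altitude
    # Update the internal state (encapsulation in action)
    def update(self, new_altitude):
        self.altitude_ = float(new_altitude)
    # Read the internal state back out
    def getAltitude(self):
        return self.altitude_
sensor = Altimeter()
sensor.update(10)
print(sensor.getAltitude())  # 10.0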
Experimenting with an Open Loop Controller!
We are going to start by talking about the graphs that will be output by every version of the controllers you are going to build, specifically what they mean and what the ideal situation looks like for each!
Let's start with the two graphs seen below. The first graph is a plot of the quadrotor's altitude vs. time. You can see that we also graph our desired set point of 10 meters. The ultimate goal is to get to that set point as fast as possible and to stay there with minimal oscillation and overshoot. The second graph is the speed of the quad vs. time. Ideally, this graph would show the speed increasing until we reach the set point and then dropping off quickly once we get there. This is because once we get to 10 meters we want to stay there and go nowhere else!
Here we see a continuous control effort of 1.7 being applied to the quad. We can see that this is not a good solution as the quad would continue to rise past the set point. We will be working to build a better controller that fluctuates the control effort in order to keep us at our set point.
The next graph that we will be working with is control effort vs. time. Control effort is the amount of effort that the actuator needs to apply in order to reach the set point. Ideally, excluding the initial actuation needed to reach the set point, a well-tuned controller minimizes the overall control effort.
Here we can see that a continuous control effort of 1.7 is never minimized, which is neither efficient nor practical!
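To see why a constant effort never settles, here is a quick back-of-the-envelope check using the model constants from the quad1d_eom.py tab in the quiz below (the v_ss name is just for this sketch). Setting the model's speed derivative to zero shows that the quad reaches a steady climb rate rather than stopping at the set point:
g = -9.81  # gravity, m/s/s
m = 1.54   # quadrotor mass, kg
c = 10.0   # electro-mechanical transmission constant
u = 1.7    # constant control effort
# The model's speed derivative is y1dot = g + c/m*u - 0.75*y1.
# Setting y1dot = 0 gives the steady-state climb speed:
v_ss = (g + c/m*u) / 0.75
print("Steady-state climb speed: {0:.2f} m/s".format(v_ss))  # ~1.64 m/s, so altitude keeps growing past 10 m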
Take some time to explore the quiz below. This quiz is not graded and requires no changes to complete it. However, becoming familiar with the codebase will help you complete future quizzes!
Start Quiz:
import numpy as np
import matplotlib.pyplot as plt
from open_controller import Open_Controller
from quad1d_eom import ydot
##################################################################################
##################################################################################
# Here we are going to apply a continuous and constant control effort with a value
# of 1.7!
control_effort = 1.7
##################################################################################
##################################################################################
# Simulation parameters
N = 500 # number of simulation points
t0 = 0 # starting time, (sec)
tf = 30 # end time, (sec)
time = np.linspace(t0, tf, N)
dt = time[1] - time[0] # delta t, (sec)
##################################################################################
# Core simulation code
# Initial conditions (i.e., initial state vector)
y = [0, 0]
#y[0] = initial altitude, (m)
#y[1] = initial speed, (m/s)
# Initialize array to store values
soln = np.zeros((len(time),len(y)))
# Create instance of Open_Controller class
controller = Open_Controller()
# Set our constant control effort
controller.setControlEffort(control_effort)
# Set altitude target
r = 10 # meters
controller.setTarget(r)
# Simulate quadrotor motion
j = 0 # dummy counter
for t in time:
    # Evaluate state at next time point
    y = ydot(y, t, controller)
    # Store results
    soln[j,:] = y
    j += 1
##################################################################################
# Plot results
# Plot 1: This is the altitude of our quad copter as a function of time!
SP = np.ones_like(time)*r # altitude set point
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax1.plot(time, soln[:,0],time,SP,'--')
ax1.set_xlabel('Time, (sec)')
ax1.set_ylabel('Altitude, (m)')
# Plot 2: This is the speed of our quad copter as a function of time!
ax2 = fig.add_subplot(212)
ax2.plot(time, soln[:,1])
ax2.set_xlabel('Time, (sec)')
ax2.set_ylabel('Speed, (m/s)')
plt.tight_layout()
plt.show()
# Plot 3: This is the control effort applied to our quad copter as a function of time!
fig2 = plt.figure()
ax3 = fig2.add_subplot(111)
ax3.plot(time, controller.effort_applied, label='effort', linewidth=3, color = 'red')
ax3.set_xlabel('Time, (sec)')
ax3.set_ylabel('Control Effort')
h, l = ax3.get_legend_handles_labels()
ax3.legend(h, l)
plt.tight_layout()
plt.show()
##################
y0 = soln[:,0] #altitude
rise_time_index = np.argmax(y0>r)
RT = time[rise_time_index]
print("The rise time is {0:.3f} seconds".format(RT))
OS = (np.max(y0) - r)/r*100
if OS < 0:
    OS = 0
print("The percent overshoot is {0:.1f}%".format(OS))
print ("The offset from the target at 30 seconds is {0:.3f} meters".format(abs(soln[-1,0]-r)))
##################################################################################
# This will be our control class, which will become more advanced over the
# course of the next lessons!
##################################################################################
# Create an Open_Controller class!
class Open_Controller:
    # Define the class initialization sequence!
    def __init__(self, start_time = 0):
        # Create a class variable to store the start time!
        self.start_time_ = start_time
        # Create a class variable to store the control effort!
        self.u = 0
        # Create a class variable to store the last timestamp!
        self.last_timestamp_ = 0
        # Create a class variable to store our set point!
        self.set_point_ = 0
        # Create a class variable to store all applied control efforts!
        self.effort_applied = []

    # Set the altitude set point
    def setTarget(self, target):
        self.set_point_ = float(target)

    # Set the desired control effort
    def setControlEffort(self, control_effort):
        self.u = float(control_effort)

    # Retrieve the current control effort
    def getControlEffort(self, time):
        # Store the last time stamp!
        self.last_timestamp_ = time
        # Store the control effort applied!
        self.effort_applied.append(self.u)
        return self.u
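Before moving on to the next tab, here is a short, standalone sketch (the values are illustrative) of how the simulator exercises this class: set the effort and target once, then request the effort at each time step.
controller = Open_Controller()
controller.setControlEffort(1.7)
controller.setTarget(10)
u = controller.getControlEffort(0.06)    # returns 1.7 and logs it
print(u)                                 # 1.7
print(controller.effort_applied)         # [1.7]
print(controller.last_timestamp_)        # 0.06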
import numpy as np
import matplotlib.pyplot as plt
from open_controller import Open_Controller
##################################################################################
## DO NOT MODIFY ANY PORTION OF THIS FILE
# This file represents the dynamical equations of motion for the 1D quadrotor
##################################################################################
def ydot(y, t, controller):
    ''' Returns the state vector at the next time-step

    Parameters:
    ----------
    y(k) = state vector, a length 2 list
         = [altitude, speed]
    t = time, (sec)
    controller = instance of a controller class (here, Open_Controller)

    Returns
    -------
    y(k+1) = [y[0], y[1]] = y(k) + ydot*dt
    '''

    # Model state
    y0 = y[0]  # altitude, (m)
    y1 = y[1]  # speed, (m/s)

    # Model parameters
    g = -9.81  # gravity, m/s/s
    m = 1.54   # quadrotor mass, kg
    c = 10.0   # electro-mechanical transmission constant

    # time step, (sec)
    dt = t - controller.last_timestamp_

    # Control effort
    u = controller.getControlEffort(t)

    ### State derivatives
    if (y0 <= 0.):
        # If the control input u is less than or equal to gravity, the vehicle
        # stays at rest on the ground; this prevents the quadrotor from
        # "falling" through the ground when thrust is too small.
        if u <= np.absolute(g*m/c):
            y0dot = 0.
            y1dot = 0.
        else:  # else if u > gravity and the quadrotor accelerates upwards
            y0dot = y1
            y1dot = g + c/m*u - 0.75*y1
    else:  # otherwise the quadrotor is already in the air
        y0dot = y1
        y1dot = g + c/m*u - 0.75*y1

    y0 += y0dot*dt
    y1 += y1dot*dt

    return [y0, y1]
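If it helps to see the pieces working together, here is a rough, hand-driven illustration, separate from the quiz files, of two Euler steps with a constant effort of 1.7. Note how controller.last_timestamp_ determines dt inside ydot.
controller = Open_Controller()
controller.setControlEffort(1.7)
y = [0, 0]                      # start on the ground, at rest
y = ydot(y, 0.06, controller)   # first step: dt = 0.06 - 0 = 0.06 sec
y = ydot(y, 0.12, controller)   # second step: dt = 0.12 - 0.06 = 0.06 sec
print(y)                        # [altitude (m), speed (m/s)] after two steps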
Check your understanding!
QUESTION:
After you have read through the code tabs above, what is the minimum control effort needed for the quad to successfully take off? Please answer with exactly 3 significant figures.
SOLUTION:
NOTE: The solutions are expressed as RegEx patterns. Udacity uses these patterns to check the given answer.
Reflect
QUESTION:
What is the significance of this value? How is it determined?
ANSWER:
This value is just enough to overcome gravity, given the supplied electro-mechanical transmission constant. In quad1d_eom.py it is calculated as np.absolute(g*m/c).
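As a quick check, you can evaluate that threshold yourself with the constants defined in quad1d_eom.py (the u_min name is just for this sketch):
import numpy as np
g = -9.81  # gravity, m/s/s
m = 1.54   # quadrotor mass, kg
c = 10.0   # electro-mechanical transmission constant
# Minimum control effort needed to lift off: just enough to balance gravity
u_min = np.absolute(g * m / c)
print("Minimum takeoff effort: {0:.2f}".format(u_min))  # 1.51 to three significant figures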